47 research outputs found

    A robust mixed-norm adaptive filter algorithm

    We propose a new member of the family of mixed-norm stochastic gradient adaptive filter algorithms for system identification applications, based upon a convex function of the error norms that underlie the least mean square (LMS) and least absolute difference (LAD) algorithms. A scalar parameter controls the mixture and relates, approximately, to the probability that the instantaneous desired response of the adaptive filter does not contain significant impulsive noise. The parameter is calculated with the complementary error function and a robust estimate of the standard deviation of the desired response. The performance of the proposed algorithm is demonstrated in a system identification simulation with impulsive and Gaussian measurement noise.
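
    As a rough illustration of the update rule described above, the sketch below combines the LMS and LAD gradient terms through a mixing parameter obtained from the complementary error function. The function name, the use of the instantaneous desired sample inside the erfc, and the MAD-based scale estimate mentioned in the comments are illustrative assumptions, not the authors' exact formulation.

        import numpy as np
        from scipy.special import erfc

        def mixed_norm_update(w, x, d, mu, sigma_hat):
            # One tap-weight update of a mixed-norm (LMS/LAD) adaptive filter.
            # sigma_hat is a robust estimate of the standard deviation of the
            # desired response, e.g. 1.4826 * median(|d - median(d)|) over a window.
            e = d - np.dot(w, x)                               # instantaneous error
            lam = erfc(abs(d) / (np.sqrt(2.0) * sigma_hat))    # ~ Prob(no impulsive noise)
            # Convex mixture of the LMS gradient (e) and the LAD gradient (sign(e))
            return w + mu * (lam * e + (1.0 - lam) * np.sign(e)) * x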

    A novel blind equalization structure for deep null communication channels

    A new blind equalization structure that is well suited to communication channels whose zeros are close to the unit circle is proposed. Most blind equalizers which operate at the baud rate perform poorly for channels whose maximum-phase zeros are close to the unit circle. This limitation is mainly due to the inability to model the inverse of such maximum-phase zeros with a finite-length filter. Our proposed structure adaptively and completely models the inverse channel without the need to transmit a training sequence. Therefore, intersymbol interference (ISI) is removed even if the channel has deep spectral nulls. Another attractive feature of this structure is that it estimates the channel parameters directly, and as such it may be used with “indirect” equalization techniques. Simulation studies are included to demonstrate the performance of the scheme.

    Toward an optimal PRNN-based nonlinear predictor

    We present an approach for selecting optimal parameters for the pipelined recurrent neural network (PRNN) in the paradigm of nonlinear and nonstationary signal prediction. We consider the role of nesting, which is inherent to the PRNN architecture. The corresponding number of nested modules needed for a certain prediction task, and their contribution toward the final prediction gain, give a thorough insight into the way the PRNN performs and offer solutions for the optimization of its parameters. In particular, nesting allows the forgetting factor in the cost function of the PRNN to exceed unity, so that it becomes an emphasis factor. This compensates for the small contribution of the distant modules to the prediction process, due to nesting, and helps to circumvent the problem of vanishing gradient experienced in RNNs used for prediction. The PRNN is shown to outperform the linear least mean square and recursive least squares predictors, as well as previously proposed PRNN schemes, with no additional computational complexity.
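
    For reference, the commonly used form of the PRNN cost (not necessarily the authors' exact notation) is a weighted sum of the squared instantaneous errors of its M nested modules,

        E(n) = sum_{i=1}^{M} lambda^(i-1) * e_i^2(n)

    so allowing the forgetting factor lambda to exceed unity turns it into an emphasis factor that boosts the contribution of the more deeply nested, distant modules.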

    Nonlinear adaptive prediction of speech with a pipelined recurrent neural network

    New learning algorithms for an adaptive nonlinear forward predictor that is based on a pipelined recurrent neural network (PRNN) are presented. A computationally efficient gradient descent (GD) learning algorithm and a novel extended recursive least squares (ERLS) learning algorithm are proposed. Simulation studies based on three speech signals that have been made publicly available on the World Wide Web (WWW) are used to test the nonlinear predictor. The gradient descent algorithm is shown to yield poor performance in terms of prediction error gain, whereas consistently improved results are achieved with the ERLS algorithm. The merit of the nonlinear predictor structure is confirmed by its yielding approximately 2 dB higher prediction gain than a linear predictor that employs the conventional recursive least squares (RLS) algorithm.
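
    The prediction gain quoted here is, by convention, the ratio of the variance of the input speech to the variance of the prediction error, expressed in decibels; a minimal helper with illustrative names:

        import numpy as np

        def prediction_gain_db(signal, error):
            # R_p = 10 * log10(var(signal) / var(error)); the ~2 dB figure above
            # is the difference between two such gains.
            return 10.0 * np.log10(np.var(signal) / np.var(error))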

    A blind lag-hopping adaptive channel shortening algorithm based upon squared auto-correlation minimization (LHSAM)

    Recent analytical results due to Walsh, Martin and Johnson showed that optimizing the single lag autocorrelation minimization (SLAM) cost does not guarantee convergence to a high signal-to-interference ratio (SIR), an important metric in channel shortening applications. We submit that we can overcome this potential limitation of the SLAM algorithm, while retaining its computational complexity advantage, by minimizing the square of a single autocorrelation value at a randomly selected lag. Our proposed lag-hopping adaptive channel shortening algorithm based upon squared autocorrelation minimization (LHSAM) therefore has low complexity, as in the SLAM algorithm, and, more importantly, a low average LHSAM cost guarantees a high SIR, as for the SAM algorithm. Simulation studies are included to confirm the performance of the LHSAM algorithm.
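
    A minimal sketch of one block update in the spirit of the algorithm described above is given below; the lag range, the unit-norm renormalisation, and all variable names are illustrative assumptions rather than the authors' implementation (rng is a numpy Generator, e.g. np.random.default_rng()).

        import numpy as np

        def lhsam_block_update(w, x, mu, nu, rng):
            # w  : channel-shortening equaliser taps
            # x  : received samples for the current block
            # nu : desired shortened channel memory; the lag is drawn from beyond nu
            lag = int(rng.integers(nu + 1, nu + 1 + len(w)))   # randomly hopped lag
            y = np.convolve(x, w, mode="valid")                # shortener output
            N = len(y) - lag
            r = np.dot(y[lag:], y[:-lag]) / N                  # autocorrelation at this lag
            grad = np.zeros_like(w)
            for i in range(len(w)):
                xi = x[len(w) - 1 - i : len(w) - 1 - i + len(y)]   # x(n - i) aligned with y(n)
                grad[i] = 2.0 * r * (np.dot(xi[lag:], y[:-lag]) + np.dot(y[lag:], xi[:-lag])) / N
            w = w - mu * grad                                  # descend on the squared autocorrelation
            return w / np.linalg.norm(w)                       # unit norm excludes the trivial w = 0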

    On stability of relaxive systems described by polynomials with time-variant coefficients

    The problem of global asymptotic stability (GAS) of the time-variant m-th order difference equation y(n) = a^T(n)y(n-1) = a_1(n)y(n-1) + ... + a_m(n)y(n-m) has been addressed for ||a(n)||_1 < 1, whereas the case ||a(n)||_1 = 1 has been left as an open question. Here, we impose the condition of convexity on the set C_0 of the initial values y(n-1) = [y(n-1), ..., y(n-m)]^T ∈ R^m and on the set A ⊆ R^m of all allowable values of a(n) = [a_1(n), ..., a_m(n)]^T, and derive the results from [1] for a_i ≥ 0, i = 1, ..., m, as a pure consequence of the convexity of the sets C_0 and A. Based upon convexity and the fixed-point iteration (FPI) technique, further GAS results for both ||a(n)||_1 < 1 and ||a(n)||_1 = 1 are derived. The issues of convergence in norm and geometric convergence are also tackled.
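
    A toy numerical illustration of the contractive case (not taken from the paper): with a_i(n) >= 0 and ||a(n)||_1 <= rho < 1, each new sample satisfies |y(n)| <= rho * max_i |y(n-i)|, so the maximum over any window of m consecutive samples shrinks by at least the factor rho every m steps, and the sequence converges to zero geometrically from any initial values.

        import numpy as np

        rng = np.random.default_rng(0)
        m, rho = 4, 0.95
        y = list(rng.standard_normal(m))             # arbitrary initial values
        for n in range(1000):
            a = rng.random(m)
            a *= rho / a.sum()                       # a_i >= 0 with ||a(n)||_1 = rho < 1
            y.append(float(np.dot(a, y[:-m-1:-1])))  # y(n) = a^T(n) [y(n-1), ..., y(n-m)]^T
        print(abs(y[-1]))                            # prints a value close to zero: geometric convergence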

    On the choice of parameters of the cost function in nested modular RNN's

    We address the choice of the coefficients in the cost function of a modular nested recurrent neural network (RNN) architecture, known as the pipelined recurrent neural network (PRNN). Such a network can cope with the problem of vanishing gradient experienced in prediction with RNNs. Constraints on the coefficients of the cost function, in the form of a vector norm, are considered. Unlike the previous cost function for the PRNN, which included a forgetting factor motivated by the recursive least squares (RLS) strategy, the proposed forms of the cost function provide “forgetting” of the outputs of adjacent modules based upon the network architecture. Such an approach takes into account the number of modules in the PRNN through the unit-norm constraint on the coefficients of the cost function. This is shown to be particularly suitable since, due to the inherent nesting in the PRNN, every module gives its full contribution to the learning process, whereas the unit-norm constrained cost function introduces a sense of forgetting into the memory management of the PRNN. The PRNN based upon the modified cost function outperforms existing PRNN schemes in the time series prediction simulations presented.
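
    One natural reading of such a constrained cost (an illustrative form, not necessarily the authors' exact notation) replaces the geometric weighting with explicit coefficients,

        E(n) = sum_{i=1}^{M} w_i * e_i^2(n),   subject to a unit norm on [w_1, ..., w_M]  (e.g. sum_i w_i = 1 with w_i >= 0)

    so the amount of "forgetting" across adjacent modules is tied directly to the number of modules M rather than to an RLS-style forgetting factor.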

    Global asymptotic convergence of nonlinear relaxation equations realised through a recurrent perceptron

    Conditions for global asymptotic stability (GAS) of a nonlinear relaxation equation realised by a nonlinear autoregressive moving average (NARMA) recurrent perceptron are provided. Convergence is derived through fixed-point iteration (FPI) techniques, based upon a contraction mapping feature of the nonlinear activation function of a neuron. Furthermore, nesting is shown to be a spatial interpretation of an FPI, which underpins the pipelined recurrent neural network (PRNN) for nonlinear signal processing.
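
    A toy illustration of the contraction-mapping argument (illustrative, not the paper's NARMA perceptron): a single feedback weight through a logistic nonlinearity defines the map y <- phi(w*y + b); since the logistic slope is at most beta/4, this map is a contraction whenever |w| * beta / 4 < 1, and the fixed-point iteration then converges to the unique fixed point from any starting value.

        import numpy as np

        def relaxation_fpi(w, b, beta=1.0, iters=100, y0=0.0):
            # Fixed-point iteration y <- phi(w*y + b) with a logistic phi;
            # a contraction (hence globally convergent) when |w| * beta / 4 < 1.
            phi = lambda v: 1.0 / (1.0 + np.exp(-beta * v))
            y = y0
            for _ in range(iters):
                y = phi(w * y + b)
            return y

        print(relaxation_fpi(w=2.0, b=-1.0, y0=0.0), relaxation_fpi(w=2.0, b=-1.0, y0=1.0))
        # both starting points settle at the same fixed point (0.5 in this example)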

    An enhanced NAS-RIF algorithm for blind image deconvolution

    We enhance the performance of the nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm for blind image deconvolution. The original cost function is modified to overcome the problem of operating on images with different scales for the representation of pixel intensity levels. Algorithm resetting is used to enhance the convergence of the conjugate gradient algorithm, and a simple pixel classification approach is used to automate the selection of the support constraint. The performance of the resulting enhanced NAS-RIF algorithm is demonstrated on various images.
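
    For orientation, a hedged sketch of a NAS-RIF-style cost is given below: the blurred image g is passed through an FIR inverse filter u, and the estimate is penalised for negative pixels inside the support region, for departing from a flat background outside it, and for the filter taps not summing to one. Parameter names and the exact weighting are illustrative, and the scale-normalisation and resetting enhancements described above are not shown.

        import numpy as np
        from scipy.signal import convolve2d

        def nas_rif_cost(u, g, support, background=0.0, gamma=1.0):
            f_hat = convolve2d(g, u, mode="same")                     # image estimate
            inside = support.astype(bool)
            j_neg  = np.sum(np.minimum(f_hat[inside], 0.0) ** 2)      # non-negativity inside support
            j_bg   = np.sum((f_hat[~inside] - background) ** 2)       # flat background outside support
            j_unit = gamma * (np.sum(u) - 1.0) ** 2                   # discourages the trivial u = 0
            return j_neg + j_bg + j_unit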

    From an a priori RNN to an a posteriori PRNN nonlinear predictor

    We provide an analysis of nonlinear time series prediction schemes, from a common recurrent neural network (RNN) to the pipelined recurrent neural network (PRNN), which consists of a number of nested small-scale RNNs. All of these schemes are shown to be suitable for nonlinear autoregressive moving average (NARMA) prediction. The time management policy of such prediction schemes is addressed and classified in terms of a priori and a posteriori modes of operation. Moreover, it is shown that the basic a priori PRNN structure exhibits certain a posteriori features. In the search for an optimal PRNN-based predictor, some inherent features of the PRNN, such as nesting and the choice of cost function, are addressed. It is shown that nesting is in essence an a posteriori technique which does not diverge. Simulations undertaken on a speech signal support the derived algorithms, which outperform the linear least mean square and recursive least squares predictors.
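
    In the standard terminology used here, an a priori predictor forms its output with the weights available before the current update, e(n) = d(n) - Phi(x^T(n) w(n-1)), whereas an a posteriori predictor re-filters with the freshly updated weights, e_bar(n) = d(n) - Phi(x^T(n) w(n)). The nesting in the PRNN repeats this re-use of updated information across modules, which is why it behaves as an a posteriori technique.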